Use of incrementally regulated discriminative margins in MCE training for speech recognition

Authors

  • Dong Yu
  • Li Deng
  • Xiaodong He
  • Alex Acero
Abstract

In this paper, we report our recent development of a novel discriminative learning technique that embeds the concept of a discriminative margin into the well-established minimum classification error (MCE) method. The idea is to impose an incrementally adjusted “margin” in the loss function of the MCE algorithm so that not only are error rates minimized but discrimination “robustness” between the training and test sets is also maintained. Experimental evaluation shows that the use of the margin improves a state-of-the-art MCE method, reducing digit errors by 17% and string errors by 19% on the TIDigits recognition task. The string error rate of 0.55% and digit error rate of 0.19% we have obtained are the best-ever results reported on this task in the literature.
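The core idea can be sketched as follows, assuming the standard sigmoid-form MCE loss over a misclassification measure; the function names and the linear margin schedule below are illustrative, not the paper's exact formulation:

```python
import math

def mce_margin_loss(d, alpha=1.0, margin=0.0):
    """Sigmoid MCE loss on the misclassification measure d
    (d > 0 means the training token is misclassified).
    A positive margin shifts the decision boundary so that
    correctly classified tokens falling within `margin` of it
    still incur loss, encouraging larger class separation."""
    return 1.0 / (1.0 + math.exp(-alpha * (d + margin)))

def margin_schedule(epoch, m_max=1.0, n_epochs=10):
    """Incrementally regulated margin: grow linearly toward
    m_max over training (an illustrative schedule)."""
    return m_max * min(1.0, epoch / n_epochs)
```

With `margin = 0` this reduces to the conventional MCE loss; increasing the margin over epochs gradually demands more separation without destabilizing early training.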


Similar articles

Large-margin minimum classification error training: A theoretical risk minimization perspective

Large-margin discriminative training of hidden Markov models has received significant attention recently. A natural and interesting question is whether the existing discriminative training algorithms can be extended directly to embed the concept of margin. In this paper, we give this question an affirmative answer by showing that the sigmoid bias in the conventional minimum classification error...


Selective MCE training strategy in Mandarin speech recognition

The use of discriminative training methods in speech recognition is a promising approach. The minimum classification error (MCE) based discriminative methods have been extensively studied and successfully applied to speech recognition [1][2][3], speaker recognition [4], and utterance verification [5][6]. Our goal is to modify the embedded string model based MCE algorithm to train a large number...


Investigations on error minimizing training criteria for discriminative training in automatic speech recognition

Discriminative training criteria have been shown to consistently outperform maximum likelihood trained speech recognition systems. In this paper we employ the Minimum Classification Error (MCE) criterion to optimize the parameters of the acoustic model of a large scale speech recognition system. The statistics for both the correct and the competing model are solely collected on word lattices wi...


Improved performance and generalization of minimum classification error training for continuous speech recognition

Discriminative training of hidden Markov models (HMMs) using segmental minimum classification error (MCE) training has been shown to work extremely well for certain speech recognition applications. It is, however, somewhat prone to overspecialization. This study investigates various techniques which improve performance and generalization of the MCE algorithm. Improvements of up to 7% in relative...


A new look at discriminative training for hidden Markov models

Discriminative training for hidden Markov models (HMMs) has been a central theme in speech recognition research for many years. One most popular technique is minimum classification error (MCE) training, with the objective function closely related to the empirical error rate and with the optimization method based traditionally on gradient descent. In this paper, we provide a new look at the MCE ...




Published: 2006